rnmamod: An R Package for Conducting Bayesian Network Meta-analysis with Missing Participants

The development of several R packages for conducting network meta-analysis has enhanced the popularity of this evidence synthesis tool. The available R packages facilitate the implementation of most models to conduct and evaluate network meta-analysis and provide the necessary results, conforming to the PRISMA-NMA statement. The rnmamod package is a novel contribution to conducting aggregate network meta-analysis using Bayesian methods, as it allows proper handling of missing participants in all models, even when only a handful of the included studies report this information. Importantly, rnmamod is the first R package to offer a rich, user-friendly visualisation toolkit that turns the “parameter-dense” output of network meta-analysis into several comprehensive graphs. The rnmamod package aids the thorough appraisal and interpretation of the results, the cross-comparison of different models, and the preparation of manuscripts for journal submission.

Loukia M. Spineli https://www.github.com/LoukiaSpin (Midwifery Research and Education Unit), Chrysostomos Kalyvas https://www.github.com/ckalyvas (Biostatistics and Research Decision Sciences), Katerina Papadimitropoulou https://www.github.com/Katerina-Pap (Health Economics and Market Access)
2023-08-02

1 Introduction

Evidence-based medicine is the backbone of informed decisions for the benefit of patients, stemming from a meticulous and judicious use of the available evidence, while also taking into account clinical experience and patient values (Sackett et al. 1996). However, the medical community is faced daily with several intervention options and dosages, challenging the optimal practice of evidence-based medicine (Lee 2022). Systematic reviews with pairwise meta-analysis summarise the evidence on pairs of interventions, providing fragmented evidence that does not serve the clinical needs. Moreover, evidence on the comparability of different interventions at the trial level is also fragmented, as it is not feasible to compare all intervention options for a condition in a single trial. These limitations led to the development and later establishment of network meta-analysis (NMA), also known as multiple treatment comparison, a new-generation evidence synthesis tool (Salanti 2012). Network meta-analysis is an extension of pairwise meta-analysis that collects all relevant pieces of evidence for a specific condition, patient population, and set of intervention options to provide coherent evidence for all possible intervention comparisons and to allow ordering the investigated interventions from the best to the worst option for a specific outcome (Caldwell 2014). Indirect evidence (obtained from different sets of trials sharing a common comparator) plays a central role in the development and prominence of NMA.

Since the introduction of indirect evidence and the early development of the relevant methodology (Higgins and Whitehead 1996; Bucher et al. 1997), the NMA framework has undergone substantial progress both conceptually and methodologically. The fast-paced publication of relevant methodological articles and systematic reviews with multiple interventions attests to the increasing popularity of NMA in the wider medical and evidence synthesis community (Efthimiou et al. 2016; Petropoulou et al. 2017). Needless to say, the availability of statistical analysis software has been the driving force behind the advances and wide dissemination of NMA. A review of the methodology and software for NMA (Efthimiou et al. 2016) listed several statistical software tools used to conduct NMA, with the R software (R Core Team 2022) being the most popular, followed by Stata (StataCorp 2021) and SAS (SAS Institute 2020).

In the last decade, there has been a rise in R packages for NMA with various functionalities (Dewey and Viechtbauer 2022). These packages can be categorised by, among others, the analysis framework (frequentist, Bayesian, or both); the modeling approach (arm-based or contrast-based); the scope breadth (narrow, such as addressing part of the NMA framework, or wide, that is, conducting NMA and assessing heterogeneity and inconsistency); and the outcome structure (a mixture of aggregate and individual patient data, or aggregate data only). Table 1 summarises the R packages on NMA published in the CRAN Task View ‘Meta-Analysis’ and their features based on the categories mentioned above. Most packages employ Bayesian methods and contrast-based modeling (trial-specific relative effects, such as log odds ratios, are pooled across the trials), have a wide scope, and deal with aggregate data. Most packages with a narrow scope consider both analysis frameworks: they do not perform NMA themselves, but use the NMA results (obtained using other R packages or statistical software tools) as an input to provide, for instance, decision-invariant bias-adjustment thresholds and intervals (nmathresh (Phillippo et al. 2018)), various league tables in heatplot style with all intervention comparisons (nmaplateplot (Wang et al. 2021)), or an intervention hierarchy approach tailored to the research question (nmarank (Nikolakopoulou et al. 2021)).

Due to the complexity and the wide scope of NMA, the researchers are faced with a large volume of results, necessary to understand the evidence base, assess the underlying assumptions, evaluate the quality of the estimated parameters (model diagnostics), and properly answer the research question, for instance, concerning the comparative effectiveness of the competing interventions and their hierarchy. To address the challenges associated with the best reporting of NMA results, the PRISMA-NMA statement (Hutton et al. 2015) was developed expanding on the PRISMA statement for pairwise meta-analysis (Page et al. 2021) to provide an extensive checklist with the essential items pertaining to the NMA results, ensuring completeness in the reporting of systematic reviews with multiple interventions. The R packages PRISMAstatement (Wasey 2019) and metagear (Lajeunesse 2021) facilitate the creation of the PRISMA flow chart and the process of article screening and data extraction, conforming to the PRISMA statement (Page et al. 2021), and are also relevant for systematic reviews with multiple interventions. The additional items in the PRISMA-NMA statement that apply to the NMA framework, such as presentation and summary of network geometry, inconsistency assessment, league tables and presentation of intervention hierarchy, are addressed in most R packages either in a targeted manner (e.g., nmaplateplot (Wang et al. 2021), and nmarank (Nikolakopoulou et al. 2021)) or collectively (bnma (Seo and Schmid 2022), netmeta (Rücker et al. 2022), gemtc (van Valkenhoef and Kuiper 2021), and pcnetmeta (Lin et al. 2017)).

Most methodological studies on and systematic reviews with NMA have implemented Bayesian methods (Efthimiou et al. 2016; Petropoulou et al. 2017). The advantages of the Bayesian framework (e.g., flexible modeling, allowance of uncertainty in all model parameters, incorporation of external relevant information and facilitation of probabilistic statements) (Sutton and Abrams 2001), in conjunction with the dominance of the BUGS software (Lunn et al. 2009) during the springtime of the NMA framework, may have contributed to the rising popularity of Bayesian NMA. The numerous R packages on Bayesian NMA also demonstrate the acclaim of Bayesian methods from the evidence synthesis community. The rest of the section pertains to R packages on Bayesian NMA published in the CRAN Task View ‘Meta-Analysis’ (Dewey and Viechtbauer 2022) that feature a wide methodological and reporting scope: bnma (Seo and Schmid 2022), gemtc (van Valkenhoef and Kuiper 2021), pcnetmeta (Lin et al. 2017), and rnmamod (Spineli 2022) (a recent novel contribution).

The R packages bnma (Seo and Schmid 2022), gemtc (van Valkenhoef and Kuiper 2021), and pcnetmeta (Lin et al. 2017) conduct hierarchical NMA using Markov chain Monte Carlo (MCMC) methods through the JAGS program (Plummer 2003). However, they differ in their methodological and reporting breadth to some extent: bnma (Seo and Schmid 2022) and gemtc (van Valkenhoef and Kuiper 2021) have a greater common basis on methods and outputs than pcnetmeta (Lin et al. 2017). This may be ascribed to using the contrast-based modeling approach, which is the established approach to meta-analysis, whilst pcnetmeta (Lin et al. 2017) considers the arm-based modeling approach (arm-specific results, such as log odds, are pooled across the trials), which deviates from the standard meta-analysis practice (Dias and Ades 2016) and is less widespread. Currently, the package pcnetmeta (Lin et al. 2017) does not contain any function to conduct inconsistency evaluation and meta-regression, is limited only to rankograms in terms of hierarchy measures (Salanti et al. 2022), and considers only the trace plots as a visual diagnostic tool. On the contrary, bnma (Seo and Schmid 2022) and gemtc (van Valkenhoef and Kuiper 2021) offer at least one method for inconsistency evaluation, allow conducting meta-regression, and consider a wider variety of hierarchy measures and diagnostic tools. However, all three R packages provide a small-sized toolkit with functions regarding the presentation of the relative treatment effects: a league table for one outcome that appears only in the console, and a forest-plot or table on the relative treatment effects of all comparisons with the selected intervention. 
Moreover, they rely more on the print() function (the results appear in the console) than on visualisation, and present the results mostly in isolation, restricting the ability to gain further insights into the performance of the NMA models and to contextualise the results in light of the strengths and limitations of the analysis.

The limited functionalities of the aforementioned R packages concerning the presentation and content of the NMA results hinder thorough scrutiny and critical appraisal, likely compromising the quality of the conclusions delivered to the end-users of systematic reviews with multiple interventions. Furthermore, undue reliance on the console limits the usability of the results, as the R users have to resort to tabulation, hampering comprehension, especially when analysing large intervention networks that are naturally associated with an immense amount of results. Alternatively, the R users have to create their own functions to obtain the necessary visualisations, a time-consuming process that depends on the user's experience, whilst time and energy could instead be put into appraising the results. The R package rnmamod (Spineli 2022), published recently in the Comprehensive R Archive Network (available at https://CRAN.R-project.org/package=rnmamod), aspires to fill this technical gap by offering a rich, dynamic, user-friendly visualisation toolkit that turns an inherently dense output of NMA into several coherent graphs. Originally, the rnmamod package was inspired by the absence of R packages that properly account for (aggregate) missing participants in the analyses underlying the NMA framework (e.g., core model, inconsistency assessment, and meta-regression).

The present article introduces the R package rnmamod that performs Bayesian hierarchical NMA in JAGS through the R package R2jags (Su and Masanao Yajima 2021), while modeling missing participants using one-stage pattern-mixture models (Little 1993). The visualisation toolkit of the package has been developed using the R package ggplot2 (Wickham 2016) to benefit from the flexibility offered in creating and customising quality graphs. The article has the following structure. Section 2 provides an overview of the pattern-mixture models for aggregate binary and continuous outcome data in NMA. Section 3 delineates the architecture of rnmamod, and Section 4 exemplifies the several functions of the package using examples from published systematic reviews with NMA. Finally, Section 5 concludes with a discussion on the limitations and future developments of the package.

Table 1: Features of R packages for network meta-analysis (CRAN Task View)
Analysis
Modeling approach
Scope breadth
Outcome structure
Package Bayesian Frequentist Contrast Arm Wide Narrow AD AD & IPD
bnma X X X X
gemtc X X X X
metapack X X X X
multinma X X X X
netmeta X X X X
NMADiagT X X X X
nmaINLA X X X X
NMAoutlier X X X X
nmaplateplot X X X X X
nmarank X X X X X
nmathresh X X X X X
pcnetmeta X X X X
Note:
AD, aggregate data; IPD, individual patient data.
multinma uses the probabilistic programming language Stan.
nmaINLA uses integrated nested Laplace approximation.
nmarank is mainly frequentist-driven but can be easily applied to Bayesian results (Papakonstantinou et al. 2022).
nmathresh is mainly Bayesian-driven but can be naturally applied to the frequentist framework (Phillippo et al. 2018).

2 Pattern-mixture models for aggregate binary and continuous outcomes

We briefly introduce the pattern-mixture model, originally proposed by Little (Little 1993), and extend it to a summary binary and continuous outcome in the evidence synthesis framework. The pattern-mixture model distinguishes participants into those completing the assigned intervention arm and those leaving it prematurely for various reasons; the former are called completers and the latter missing participants. Outcome information is available only for the completers, who remained in the assigned intervention until trial completion. If missing participants are not followed up after leaving the trial, which is usually the case, their outcome can only be hypothesised with some uncertainty; hence, we can specify a distribution of possible values to describe the hypothetical outcome of the missing participants in the assigned intervention. Ideally, this distribution should be elicited using expert opinion for the investigated outcome and interventions (White et al. 2007). Then, the weighted average of the observed and hypothesised outcomes, using the proportions of completers and missing participants as the corresponding weights, yields the true outcome for all randomised participants receiving the investigated intervention. This corresponds to the intention-to-treat analysis, and it is of particular interest to investigate the impact of different scenarios about the missingness mechanism (implied by the distribution of hypothetical outcome values for the missing participants) on the treatment effect. This sensitivity analysis is at the core of the literature on handling missing data properly (Missing data in randomised controlled trials: A practical guide. 2007; White et al. 2007; The prevention and treatment of missing data in clinical trials panel on handling missing data in clinical trials. 2010).

Consider a set of \(N\) trials collected using a systematic review process. These trials investigate different sets of two or more carefully-selected interventions for a specific target population and clinical condition. We extract information on the number randomised, the number of completers and missing participants, and the measured outcome from each arm of every trial. The pattern-mixture framework models completers and missing participants simultaneously, maintaining the randomised sample, as follows:

\[\begin{aligned} \theta_{ik} = \theta^{c}_{ik} \times (1 - q_{ik}) + \theta^{m}_{ik} \times q_{ik} \end{aligned}\]

where \(\theta_{ik}\) is the true outcome in arm \(k\) of trial \(i\), \(\theta^{c}_{ik}\) and \(\theta^{m}_{ik}\) are the outcomes among the completers and missing participants, respectively (the superscripts \(c\) and \(m\) stand for completers and missing), and \(q_{ik}\) is the proportion of missing participants. It holds that
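As a small numerical illustration of the weighted average above (the values are hypothetical and not taken from any trial in the article):

```r
# Hypothetical arm-level quantities for a binary outcome
theta_c <- 0.30  # outcome risk among the completers
theta_m <- 0.50  # hypothesised risk among the missing participants
q       <- 0.20  # proportion of missing participants in the arm

# Pattern-mixture weighted average: true outcome among all randomised
theta <- theta_c * (1 - q) + theta_m * q
theta
# 0.34
```

Varying theta_m over a range of plausible values is exactly the sensitivity analysis described in the text.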

\[\begin{aligned} \theta_{ik} &= P(I_{ikj} = 1 | M_{ikj} = 1 \cup M_{ikj} = 0) \\ \theta^{c}_{ik} &= P(I_{ikj} = 1 | M_{ikj} = 0) \\ \theta^{m}_{ik} &= P(I_{ikj} = 1 | M_{ikj} = 1) \end{aligned}\]

for a binary outcome, and

\[\begin{aligned} \theta_{ik} &= E(Y_{ikj} | M_{ikj} = 1 \cup M_{ikj} = 0) \\ \theta^{c}_{ik} &= E(Y_{ikj} | M_{ikj} = 0) \\ \theta^{m}_{ik} &= E(Y_{ikj} | M_{ikj} = 1) \end{aligned}\]

for a continuous outcome, with \(I_{ikj}\) and \(M_{ikj}\) being dummy variables referring to whether a participant \(j\) experienced the outcome or left the trial prematurely, respectively, and \(Y_{ikj}\) referring to the continuous outcome of participant \(j\).

It has been suggested in the relevant published literature to replace the missingness parameter \(\theta^{m}_{ik}\) with the following parameters to measure the informative missingness as a function of the outcome in completers and missing participants (White et al. 2008; Mavridis et al. 2015; Turner et al. 2015):

\[\begin{aligned} \phi_{ik} = logit(\theta^{m}_{ik}) - logit(\theta^{c}_{ik}) \end{aligned}\]

the informative missingness odds ratio in the logarithmic scale for binary outcomes, and

\[\begin{aligned} \psi_{ik} = \theta^{m}_{ik} - \theta^{c}_{ik} \end{aligned}\]

the informative missingness difference of means for continuous outcomes. Both informative missingness parameters take values in \({\rm I\!R}\), with zero implying the missing at random assumption (ignorable missingness) and non-zero values indicating the missing not at random assumption (non-ignorable missingness). Essentially, the informative missingness parameters quantify departures from the missing at random assumption. Since these parameters are unknown, the analysts can consider one of the following options:

- assign a fixed value, which corresponds to imputation (Higgins et al. 2008; Turner et al. 2015; Spineli 2019);
- assume a distribution with suggested parameter values (White et al. 2008; Mavridis et al. 2015); or
- use the Bayesian framework to estimate their posterior distribution (Turner et al. 2015; Spineli 2019).
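For instance, under a fixed informative missingness odds ratio, the hypothesised risk among missing participants follows by inverting the definition of \(\phi_{ik}\) above (the numbers below are hypothetical):

```r
# Helper functions for the logit scale
logit     <- function(p) log(p / (1 - p))
inv_logit <- function(x) 1 / (1 + exp(-x))

theta_c <- 0.30    # observed risk among the completers
phi     <- log(2)  # fixed log IMOR: missing participants have twice
                   # the odds of the event (a missing-not-at-random scenario)

# Invert phi = logit(theta_m) - logit(theta_c) to recover theta_m
theta_m <- inv_logit(logit(theta_c) + phi)
round(theta_m, 3)
# 0.462
```

Setting phi to zero recovers theta_m = theta_c, i.e., the missing at random assumption.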

Typically, a normal distribution is assigned to both informative missingness parameters (Higgins et al. 2008; White et al. 2008; Mavridis et al. 2015; Turner et al. 2015; Spineli 2019). In the Bayesian framework, the analysts assign a prior normal distribution to these parameters and can specify its mean and variance to be specific to the interventions, trials, or trial-arms, and to be fixed, exchangeable, or independent across the interventions, trials, or trial-arms (Turner et al. 2015; Spineli 2019).

3 The architecture of rnmamod

Functions on data preparation and model implementation

The run_model() function has a central role in the architecture of the rnmamod package. It is the function for conducting the core NMA model and the related analyses that assess the underlying assumptions of NMA. It also comprises the object of most functions that create the necessary visualisations. Initially, run_model() calls the data_preparation() function to prepare the dataset in the proper format to fit the model in JAGS. The dataset is provided in the one-study-per-row format, typical for code written in the BUGS language. Then run_model() bundles the dataset and the necessary parameters (processed through the missingness_param_prior(), heterogeneity_param_prior(), and baseline_model() functions) to conduct NMA through the prepare_model() function. The prepare_model() function contains the code in the BUGS language to conduct a hierarchical one-stage NMA, as published by the NICE Decision Support Unit in a series of tutorial papers on evidence synthesis methods for decision-making (Dias et al. 2013). The missingness_param_prior() and heterogeneity_param_prior() functions process the hyperparameters of the selected prior distributions for the informative missingness parameter and the between-study heterogeneity parameter, respectively, to be read by JAGS. The baseline_model() function is relevant only in the case of a binary outcome: it processes the baseline risk defined by the user (or the default option) before conducting NMA.
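To make the workflow concrete, a minimal sketch of a run_model() call is given below. This is not a definitive call: the argument names and values shown are assumptions based on the description above and should be checked against the package manual, and my_data is a hypothetical dataset in the one-study-per-row format.

```r
library(rnmamod)

# Hedged sketch (argument names/values are illustrative assumptions):
res <- run_model(data = my_data,          # hypothetical one-study-per-row dataset
                 measure = "OR",          # summary effect measure
                 model = "RE",            # random-effects NMA
                 assumption = "IDE-ARM",  # structure of the missingness parameters
                 heter_prior = list("halfnormal", 0, 1),  # heterogeneity prior
                 mean_misspar = c(0, 0),  # prior mean(s) of the log IMOR
                 var_misspar = 1,         # prior variance of the log IMOR
                 n_chains = 3, n_iter = 10000, n_burnin = 1000, n_thin = 1)
```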

Subsequent analyses associated with the underlying assumptions of NMA are performed by specially devised functions that inherit most arguments from run_model(). Therefore, careful specification of the arguments in run_model() is essential for the contingent functions to yield sensible results and ensure meaningful comparison with the NMA results. These functions refer to the local and global consistency evaluation (run_nodesplit() and run_ume()), network meta-regression (run_metareg()), multiple pairwise meta-analyses (run_series_meta()) and sensitivity analysis to different missingness scenarios (run_sensitivity()) when the number of missing participants has been extracted for all study-arms. The functions run_nodesplit() and run_ume() call the prepare_nodesplit() and prepare_ume() functions, respectively, to fit the node-splitting and the unrelated mean effects models in JAGS. The function improved_ume() is also called to ensure a proper accommodation of the multi-arm trials in the unrelated mean effects model. In line with run_model(), network meta-regression, multiple pairwise meta-analyses, and sensitivity analysis are fitted in JAGS through the prepare_model() function. All model-related functions can be passed as an object to the mcmc_diagnostics() function to generate the diagnostic plots and measures for the monitored model parameters.
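Continuing the sketch, the downstream analyses take the run_model() object as their main argument; again, the argument names below are assumptions drawn from the description above, not a definitive API:

```r
# Hedged sketch: downstream analyses inherit their setup from run_model()
node <- run_nodesplit(full = res)  # local inconsistency (node splitting)
ume  <- run_ume(full = res)        # global inconsistency (unrelated mean effects)

# Convergence checks for the monitored parameters of a fitted model
mcmc_diagnostics(net = res, par = c("EM", "tau"))
```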

Figure 1 illustrates the network of the functions developed to prepare the data and conduct NMA and related analyses. Nodes and links refer to functions and the synergy of two functions. The node’s size indicates the usability of the corresponding function. For instance, run_model() is an over-represented node for having a dual role in the network: it is an object to most functions (e.g., run_nodesplit() and mcmc_diagnostics()) and depends on other functions to operate (e.g., data_preparation() and prepare_model()). The node’s colour indicates the operationality of the function: most functions perform model implementation (green nodes), followed by functions that contain the BUGS code (blue nodes) or process the dataset and prepare specific arguments (purple nodes) for the corresponding model. The baseline_model() function contains all three operationalities, whilst mcmc_diagnostics() offers only a set with MCMC diagnostics.

Figure 1: Network of functions for data preparation and model implementation

The visualisation toolkit

Figure 2 presents the network of visualisation-related functions alongside run_model() and several model-related functions. The functions associated with summarising and presenting the results have a common structure: run_model() and the model-related function of interest are passed as objects into the corresponding arguments. Hence, run_model() comprises the backbone of the network and forms the largest node (Figure 2). The visualisation-related functions are distinguished into stand-alone and platform functions. The stand-alone functions are immediately related to generating the relevant graphs. For instance, forestplot_metareg() and interval_panel_ume() constitute stand-alone functions and return only the intended graph, using run_model() together with run_metareg() and run_ume(), respectively, as objects in their arguments. Other stand-alone functions depend on a single function to operate; for example, rankosucra_plot() and kld_plot() use only run_model() and robustness_index(), respectively, in their arguments. The platform functions host the stand-alone functions and generate complementary tables and further graphs. They are easy to spot in Figure 2, as they are named after the related model, with plot affixed at the end: nodesplit_plot(), ume_plot(), metareg_plot(), and series_meta_plot(). For instance, metareg_plot() calls scatterplot_sucra() and forestplot_sucra() to return the corresponding intended graphs and prints tables in the console where the effect estimates and predictions from NMA are juxtaposed with those from network meta-regression. Every analysis has an individualised visualisation toolkit, indicated by the functions sharing the same colour node (Figure 2). Only network meta-regression (blue nodes) and conducting separate pairwise meta-analyses (green nodes) share a few stand-alone functions with NMA (grey nodes), namely, league_heatmap() and league_heatmap_pred().
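A sketch of the platform functions described above follows; the fitted objects from the earlier analyses are passed in together, and the argument names are assumptions to be checked against the package manual:

```r
# Hedged sketch: platform functions juxtapose NMA with the related model
# ('res', 'node', and 'ume' are hypothetical fitted objects from
# run_model(), run_nodesplit(), and run_ume(), respectively)
nodesplit_plot(full = res, node = node)  # direct vs indirect estimates per split node
ume_plot(full = res, ume = ume)          # NMA vs unrelated mean effects model

# Stand-alone league table shared across analyses
league_heatmap(full1 = res)
```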

Figure 2: Network of functions for summarising and presenting the analysis results


CRAN packages used

nmathresh, nmaplateplot, nmarank, PRISMAstatement, metagear, bnma, netmeta, gemtc, pcnetmeta, rnmamod, R2jags, ggplot2

CRAN Task Views implied by cited packages

Bayesian, MetaAnalysis, MissingData, MixedModels, Phylogenetics, Spatial, TeachingStatistics, WebTechnologies

H. C. Bucher, G. H. Guyatt, L. E. Griffith and S. D. Walter. The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol, 50(6): 683–691, 1997. DOI 10.1016/s0895-4356(97)00049-8.
D. M. Caldwell. An overview of conducting systematic reviews with network meta-analysis. Syst Rev, 3: 109, 2014. DOI 10.1186/2046-4053-3-109.
M. Dewey and W. Viechtbauer. CRAN Task View: Meta-analysis. 2022. URL https://CRAN.R-project.org/view=MetaAnalysis. Version 2022-11-05.
S. Dias and A. E. Ades. Absolute or relative effects? Arm-based synthesis of trial data. Res Synth Methods, 7(1): 23–28, 2016. DOI 10.1002/jrsm.1184.
S. Dias, A. J. Sutton, A. E. Ades and N. J. Welton. Evidence synthesis for decision making 2: A generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Decis Making, 33(5): 607–617, 2013.
O. Efthimiou, T. P. A. Debray, G. van Valkenhoef, S. Trelle, K. Panayidou, K. G. M. Moons, J. B. Reitsma, A. Shang, G. Salanti and GetReal Methods Review Group. GetReal in network meta-analysis: A review of the methodology. Res Synth Methods, 7(3): 236–263, 2016. DOI 10.1002/jrsm.1195.
J. P. T. Higgins, I. R. White and A. M. Wood. Imputation methods for missing outcome data in meta-analysis of clinical trials. Clin Trials, 5(3): 225–239, 2008. DOI 10.1177/1740774508091600.
J. P. Higgins and A. Whitehead. Borrowing strength from external trials in a meta-analysis. Stat Med, 15(24): 2733–2749, 1996.
B. Hutton, G. Salanti, D. M. Caldwell, A. Chaimani, C. H. Schmid, C. Cameron, J. P. A. Ioannidis, S. Straus, K. Thorlund, J. P. Jansen, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: Checklist and explanations. Ann Intern Med, 162(11): 777–784, 2015. DOI 10.7326/M14-2385.
M. J. Lajeunesse. Metagear: Comprehensive research synthesis tools for systematic reviews and meta-analysis. 2021. URL https://CRAN.R-project.org/package=metagear. R package version 0.7.
A. Lee. The development of network meta-analysis. J R Soc Med, 115(8): 313–321, 2022. DOI 10.1177/01410768221113196.
L. Lin, J. Zhang, J. S. Hodges and H. Chu. Performing arm-based network meta-analysis in R with the pcnetmeta package. Journal of Statistical Software, 80(5): 1–25, 2017. DOI 10.18637/jss.v080.i05.
R. Little. Pattern-mixture models for multivariate incomplete data. J Am Stat Assoc, 88(421): 125–134, 1993. DOI 10.2307/2290705.
D. Lunn, D. Spiegelhalter, A. Thomas and N. Best. The BUGS project: Evolution, critique and future directions. Stat Med, 28(25): 3049–3067, 2009. DOI 10.1002/sim.3680.
D. Mavridis, I. R. White, J. P. T. Higgins, A. Cipriani and G. Salanti. Allowing for uncertainty due to missing continuous outcome data in pairwise and network meta-analysis. Stat Med, 34(5): 721–741, 2015. DOI 10.1002/sim.6365.
Missing data in randomised controlled trials: A practical guide. Health Technology Assessment Methodology Programme: Birmingham, 2007. URL https://researchonline.lshtm.ac.uk/id/eprint/4018500.
A. Nikolakopoulou, G. Schwarzer and T. Papakonstantinou. Nmarank: Complex hierarchy questions in network meta-analysis. 2021. URL https://CRAN.R-project.org/package=nmarank. R package version 0.2-3.
M. J. Page, J. E. McKenzie, P. M. Bossuyt, I. Boutron, T. C. Hoffmann, C. D. Mulrow, L. Shamseer, J. M. Tetzlaff, E. A. Akl, S. E. Brennan, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372: n71, 2021. DOI 10.1136/bmj.n71.
M. Petropoulou, A. Nikolakopoulou, A.-A. Veroniki, P. Rios, A. Vafaei, W. Zarin, M. Giannatsi, S. Sullivan, A. C. Tricco, A. Chaimani, et al. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol, 82: 20–28, 2017. DOI 10.1016/j.jclinepi.2016.11.002.
D. M. Phillippo, S. Dias, A. E. Ades, V. Didelez and N. J. Welton. Sensitivity of treatment recommendations to bias in network meta-analysis. J R Stat Soc Ser A Stat Soc, 181(3): 843–867, 2018. DOI 10.1111/rssa.12341.
M. Plummer. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. Technische Universität Wien, Vienna, Austria, 2003. URL https://www.R-project.org/conferences/DSC-2003/Proceedings/Plummer.pdf.
R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2022. URL https://www.R-project.org/.
G. Rücker, U. Krahn, J. König, O. Efthimiou, A. Davies, T. Papakonstantinou and G. Schwarzer. Netmeta: Network meta-analysis using frequentist methods. 2022. URL https://CRAN.R-project.org/package=netmeta. R package version 2.6-0.
D. L. Sackett, W. M. Rosenberg, J. A. Gray, R. B. Haynes and W. S. Richardson. Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023): 71–72, 1996. DOI 10.1136/bmj.312.7023.71.
G. Salanti. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: Many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods, 3(2): 80–97, 2012. DOI 10.1002/jrsm.1037.
G. Salanti, A. Nikolakopoulou, O. Efthimiou, D. Mavridis, M. Egger and I. R. White. Introducing the treatment hierarchy question in network meta-analysis. Am J Epidemiol, 191(5): 930–938, 2022. DOI 10.1093/aje/kwab278.
SAS Institute. The SAS System for Windows. Release 9.4. Cary, NC: SAS Institute Inc. 2020. URL https://www.sas.com.
M. Seo and C. Schmid. Bnma: Bayesian network meta-analysis using ’JAGS’. 2022. URL https://CRAN.R-project.org/package=bnma. R package version 1.5.0.
L. M. Spineli. An empirical comparison of bayesian modelling strategies for missing binary outcome data in network meta-analysis. BMC Med Res Methodol, 19(1): 86, 2019. DOI 10.1186/s12874-019-0731-y.
L. M. Spineli. Rnmamod: Bayesian network meta-analysis with missing participants. 2022. URL https://CRAN.R-project.org/package=rnmamod. R package version 0.3.0.
StataCorp. Stata Statistical Software: Release 17. College Station, TX: StataCorp LLC. 2021. URL http://www.stata.com.
Y.-S. Su and M. Yajima. R2jags: Using R to run ’JAGS’. 2021. URL https://CRAN.R-project.org/package=R2jags. R package version 0.7-1.
A. J. Sutton and K. R. Abrams. Bayesian methods in meta-analysis and evidence synthesis. Stat Methods Med Res, 10(4): 277–303, 2001. DOI 10.1177/096228020101000404.
The prevention and treatment of missing data in clinical trials. Panel on Handling Missing Data in Clinical Trials, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press, 2010. URL https://www.nap.edu.
N. L. Turner, S. Dias, A. E. Ades and N. J. Welton. A bayesian framework to account for uncertainty due to missing binary outcome data in pairwise meta-analysis. Stat Med, 34(12): 2062–2080, 2015. DOI 10.1002/sim.6475.
G. van Valkenhoef and J. Kuiper. Gemtc: Network meta-analysis using bayesian methods. 2021. URL https://CRAN.R-project.org/package=gemtc. R package version 1.0-1.
Z. Wang, L. Lin, S. Zhao and H. Chu. Nmaplateplot: The plate plot for network meta-analysis results. 2021. URL https://CRAN.R-project.org/package=nmaplateplot.
J. O. Wasey. PRISMAstatement: Plot flow charts according to the "PRISMA" statement. 2019. URL https://CRAN.R-project.org/package=PRISMAstatement. R package version 1.1.1.
I. R. White, J. Carpenter, S. Evans and S. Schroter. Eliciting and using expert opinions about dropout bias in randomized controlled trials. Clin Trials, 4(2): 125–139, 2007. DOI 10.1177/1740774507077849.
I. R. White, J. P. T. Higgins and A. M. Wood. Allowing for uncertainty due to missing data in meta-analysis—part 1: Two-stage methods. Stat Med, 27(5): 711–727, 2008. DOI 10.1002/sim.3008.
H. Wickham. ggplot2: Elegant graphics for data analysis. Springer-Verlag New York, 2016. URL https://ggplot2.tidyverse.org.


Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Spineli, et al., "rnmamod: An R Package for Conducting Bayesian Network Meta-analysis with Missing Participants", The R Journal, 2023

BibTeX citation

@article{rnmamod-article,
  author = {Spineli, Loukia M. and Kalyvas, Chrysostomos and Papadimitropoulou, Katerina},
  title = {rnmamod: An R Package for Conducting Bayesian Network Meta-analysis with Missing Participants},
  journal = {The R Journal},
  year = {2023},
  issn = {2073-4859},
  pages = {1}
}